
    Formal Semantics of the CHART Transformation Language


    A GROOVE Solution for the BPMN to BPEL Model Transformation

    In this paper we present a solution to the model transformation between two standard languages for business process modelling, BPMN and BPEL, using the GROOVE tool set. GROOVE is a tool for graph transformations that uses directed, edge-labelled simple graphs and the SPO approach [Ren04]. Given a graph grammar (G, P), composed of a start graph G and a set of production rules P, the tool computes a labelled transition system (LTS) corresponding to all possible derivations in this grammar. The tool is freely available for download; the latest version and documentation can be found at http://sourceforge.net/projects/groove. The graph grammar presented here, as well as a detailed description of the sample realization of the case study, is available in the attachment.
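The core mechanism the abstract describes, deriving an LTS from a graph grammar (G, P), can be illustrated with a minimal sketch. This is not the GROOVE implementation or the full SPO approach: graphs are simplified to frozensets of labelled edges, and a "rule" matches whole edges rather than performing genuine graph morphism matching.

```python
from collections import deque

def apply_rule(graph, rule):
    """Apply one rewrite rule (name, match, delete, add) if it matches."""
    name, match, delete, add = rule
    if match <= graph:                       # all required edges present
        return frozenset((graph - delete) | add)
    return None

def build_lts(start, rules):
    """Breadth-first exploration of all derivations: states are graphs,
    transitions are labelled with the applied rule's name."""
    states, transitions = {start}, []
    queue = deque([start])
    while queue:
        g = queue.popleft()
        for rule in rules:
            h = apply_rule(g, rule)
            if h is not None and h != g:
                transitions.append((g, rule[0], h))
                if h not in states:
                    states.add(h)
                    queue.append(h)
    return states, transitions

# Toy grammar (hypothetical example): a token moves from node a to b to c.
start = frozenset({("token", "at", "a")})
rules = [
    ("a_to_b", frozenset({("token", "at", "a")}),
               frozenset({("token", "at", "a")}), frozenset({("token", "at", "b")})),
    ("b_to_c", frozenset({("token", "at", "b")}),
               frozenset({("token", "at", "b")}), frozenset({("token", "at", "c")})),
]
states, transitions = build_lts(start, rules)
```

Here the resulting LTS has three states and two transitions, one per rule application, which is the structure GROOVE computes (and can model-check) for real grammars.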

    Setting-up early computer programs: D. H. Lehmer's ENIAC computation

    A complete reconstruction of Lehmer's ENIAC set-up for computing the exponents of 2 modulo p is given. This program served as an early test program for the ENIAC (1946). The reconstruction illustrates the difficulties early programmers faced in finding a way between a human-operated and a machine-operated computation. These difficulties concern both the content level (the algorithm) and the formal level (the logic of sequencing operations).

    Optimisation of traffic accident statistics

    The OPTIMA project, or "Optimisation of traffic accident statistics", initiated by the DWTC, is part of a strategy to obtain the means necessary to establish a traffic safety policy. A policy on traffic safety should rest on a reliable and representative reflection of safety issues, which makes traffic accident data an essential element in policy decisions: the availability of reliable and representative statistical material is the basis upon which traffic safety policy must be founded. The project objective is to obtain more complete and more representative traffic accident statistics by linking hospital records with existing police records and comparing the hospital data with the available police information. Part 1 of the project, the description of the existing situation, proceeds in a series of steps. The introductory text explores the problem of the current incompleteness of recorded data in Belgium. This is followed by an international investigation of recording methods in the Netherlands, Sweden, Great Britain and the USA, which provides a more detailed description of hospital records and the concurrence between hospital and police records. The following report sets out the current Belgian process for hospital records, as well as the procedure through which the hospital notifies the police. This part ends with a series of policy suggestions based on the weaknesses identified in the existing recording formalities. Part 2 of the project outlines a demonstration record system for traffic casualties in hospitals. The aim is to introduce this demo into an emergency admission service and to extend it to a day clinic at a later stage. At the same time, the possibility of coupling hospital data with police data will be explored, and foreign experience with traffic casualty records will be put to use in this experiment. Alongside the demonstration, the possibility of recording traffic casualties through primary care services will also be examined. Part 3 features policy proposals and validates the research results. This inception report covers the state of affairs in part 1 of the research project, and more specifically the problem of the current under-recording of traffic casualties in Belgium and the recording methods in the Netherlands, Sweden, Great Britain and the USA.
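The project's central technical operation, linking hospital records to police records, can be sketched as a deterministic record linkage on shared fields. The key fields below (accident date, birth year, municipality) are hypothetical stand-ins, not the project's actual linkage keys; unmatched hospital records are the candidates for police under-recording.

```python
def link_records(hospital, police, keys=("date", "birth_year", "municipality")):
    """Pair each hospital record with police records agreeing on all key
    fields; return (matched, unmatched) hospital records."""
    index = {}
    for p in police:
        index.setdefault(tuple(p[k] for k in keys), []).append(p)
    matched, unmatched = [], []
    for h in hospital:
        hits = index.get(tuple(h[k] for k in keys), [])
        (matched if hits else unmatched).append((h, hits))
    return matched, unmatched

# Illustrative data: one casualty known to both sources, one hospital-only.
hospital = [
    {"date": "2001-05-01", "birth_year": 1970, "municipality": "Gent"},
    {"date": "2001-06-02", "birth_year": 1980, "municipality": "Brugge"},
]
police = [
    {"date": "2001-05-01", "birth_year": 1970, "municipality": "Gent"},
]
matched, unmatched = link_records(hospital, police)
```

In practice such linkage is probabilistic and must tolerate recording errors in the key fields; the exact-match join above only shows where the completeness gap becomes measurable.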

    The use of digital maps for the evaluation and improvement of a bicycle-network and infrastructure

    A sustainable mobility policy goes hand in hand with encouraging bicycle use. Potential cyclists, however, often give up because the safety and riding comfort of cycle paths along Flemish roads leave much to be desired. One way to improve the safety of cycle paths is to optimise the quality of the cycling infrastructure along a bicycle route network. Given the size of such a network, it is essential to determine at which locations infrastructure improvements are the highest priority. This article presents a methodology for evaluating cycle-path infrastructure and detecting the severity of bottlenecks along the bicycle route network. The methodology computes the bottlenecks in the network using a Geographic Information System (GIS). Bottlenecks are identified by calculating the deviation of the existing cycling infrastructure from a required, and therefore safer, infrastructure. The Vademecum Fietsvoorziening, a document of the Flemish government, was used as the reference standard; it describes the required cycling infrastructure as a function of the characteristics of the adjacent road. The first step is to select all relevant criteria that determine the safety of the cycle path. Next, an inventory is compiled of all attributes along the road network. Each attribute (e.g. the width of the cycle path) is evaluated and contributes, wholly or partially, to the severity of a bottleneck. Using a multi-criteria analysis, a bottleneck score is calculated for every stretch of cycle path in the network, and the results are visualised on a map. This research fits within the mobility policy of the city of Ghent and forms part of a priority map indicating which cycle paths should be (re)constructed first.
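The multi-criteria bottleneck score described above, a weighted deviation of each segment's existing infrastructure from the required infrastructure, can be sketched as follows. The criteria, weights, and required values are illustrative placeholders, not the Vademecum Fietsvoorziening's actual norms.

```python
def bottleneck_score(segment, required, weights):
    """Weighted multi-criteria deviation (0 = meets every requirement).
    Each criterion contributes its relative shortfall, scaled by its weight."""
    score = 0.0
    for criterion, weight in weights.items():
        actual, needed = segment[criterion], required[criterion]
        deficit = max(0.0, (needed - actual) / needed)   # relative shortfall in [0, 1]
        score += weight * deficit
    return score

# Hypothetical criteria: path width, separation from traffic, surface quality.
weights  = {"width_m": 0.5, "separation_m": 0.3, "surface_quality": 0.2}
required = {"width_m": 1.75, "separation_m": 1.0, "surface_quality": 1.0}
segment  = {"width_m": 1.0,  "separation_m": 0.0, "surface_quality": 0.8}
score = bottleneck_score(segment, required, weights)
```

Ranking all segments by this score, computed per GIS network link, yields exactly the kind of priority map the abstract describes.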

    Elastic-Net Regularization in Learning Theory

    Within the framework of statistical learning theory we analyze in detail the so-called elastic-net regularization scheme proposed by Zou and Hastie for the selection of groups of correlated variables. To investigate the statistical properties of this scheme, and in particular its consistency properties, we set up a suitable mathematical framework. Our setting is random-design regression, where we allow the response variable to be vector-valued and consider prediction functions which are linear combinations of elements ({\em features}) in an infinite-dimensional dictionary. Under the assumption that the regression function admits a sparse representation on the dictionary, we prove that there exists a particular ``{\em elastic-net representation}'' of the regression function such that, as the number of data points increases, the elastic-net estimator is consistent not only for prediction but also for variable/feature selection. Our results include finite-sample bounds and an adaptive scheme to select the regularization parameter. Moreover, using convex analysis tools, we derive an iterative thresholding algorithm for computing the elastic-net solution which is different from the optimization procedure originally proposed by Zou and Hastie. Comment: 32 pages, 3 figures
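The flavour of an iterative thresholding scheme for the elastic-net functional can be shown in a minimal sketch, a standard proximal-gradient iteration for (1/2n)||Xw - y||² + λ₁||w||₁ + λ₂||w||², not the paper's exact algorithm or its convergence analysis. The proximal step soft-thresholds the gradient step (the ℓ¹ part) and then shrinks it (the ℓ² part).

```python
def soft(z, t):
    """Soft-thresholding operator: sign(z) * max(|z| - t, 0)."""
    return max(abs(z) - t, 0.0) * (1.0 if z > 0 else -1.0)

def elastic_net_ista(X, y, lam1, lam2, step, iters):
    """Iterative soft-thresholding for the elastic-net least-squares problem."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        resid = [sum(X[i][j] * w[j] for j in range(d)) - y[i] for i in range(n)]
        grad = [sum(X[i][j] * resid[i] for i in range(n)) / n for j in range(d)]
        # proximal step: soft-threshold, then shrink by the l2 penalty
        w = [soft(w[j] - step * grad[j], step * lam1) / (1.0 + 2.0 * step * lam2)
             for j in range(d)]
    return w

# Toy problem with orthogonal design: the sparse coordinate is zeroed out.
X = [[1.0, 0.0], [0.0, 1.0]]
y = [1.0, 0.0]
w = elastic_net_ista(X, y, lam1=0.1, lam2=0.1, step=0.5, iters=300)
```

On this toy problem the iteration converges to a fixed point with the irrelevant coordinate exactly zero, illustrating why thresholding methods are natural for the feature-selection consistency the abstract proves.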

    An approach for uncertainty aggregation using generalised conjunction/disjunction aggregators

    Decision Support Systems are often used in the area of system evaluation. The quality of the output of such a system is only as good as the quality of the data that is used as input. Uncertainty on data, if not taken into account, can lead to evaluation results that are not representative. In this paper, we propose a technique to extend Generalised Conjunction/Disjunction aggregators to deal with uncertainty in Decision Support Systems. We first define the logic properties of uncertainty aggregation through reasoning on strict aggregators and afterwards extend this logic to partial aggregators.
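A Generalised Conjunction/Disjunction aggregator is commonly realised as a weighted power mean, where the exponent r tunes the operator continuously between conjunctive (min-like) and disjunctive (max-like) behaviour. The sketch below shows only that certain (point-valued) case; the paper's extension to uncertain inputs is not reproduced here.

```python
def gcd_aggregate(scores, weights, r):
    """Weighted power mean (sum_i w_i * x_i^r)^(1/r) over scores in [0, 1].
    r = 1 gives the arithmetic mean; large negative r approaches pure
    conjunction (min), large positive r pure disjunction (max)."""
    if r < 0 and any(x == 0.0 for x in scores):
        return 0.0   # conjunctive annihilator: one zero input forces zero
    return sum(w * x ** r for w, x in zip(weights, scores)) ** (1.0 / r)

# The same two inputs under neutral, conjunctive, and disjunctive settings.
balanced = gcd_aggregate([0.4, 0.8], [0.5, 0.5], 1)      # arithmetic mean
conj     = gcd_aggregate([0.4, 0.8], [0.5, 0.5], -20)    # close to min
disj     = gcd_aggregate([0.4, 0.8], [0.5, 0.5], 20)     # close to max
```

A "strict" aggregator in this family is one where every input must be nonzero for a nonzero output (the annihilator branch above); partial aggregators relax that requirement.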

    A Regularized Method for Selecting Nested Groups of Relevant Genes from Microarray Data

    Gene expression analysis aims at identifying the genes able to accurately predict biological parameters such as, for example, disease subtyping or progression. While accurate prediction can be achieved by means of many different techniques, gene identification, due to gene correlation and the limited number of available samples, is a much more elusive problem. Small changes in the expression values often produce different gene lists, and solutions which are both sparse and stable are difficult to obtain. We propose a two-stage regularization method able to learn linear models characterized by high prediction performance. By varying a suitable parameter, these linear models allow one to trade sparsity for the inclusion of correlated genes and to produce gene lists which are almost perfectly nested. Experimental results on synthetic and microarray data confirm the interesting properties of the proposed method and its potential as a starting point for further biological investigations. Comment: 17 pages, 8 PostScript figures
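The nesting property claimed above can be made concrete with a deliberately simplified stand-in: selecting genes whose relevance score exceeds a threshold, then relaxing the threshold, yields perfectly nested lists. This is not the paper's two-stage regularization method, only an illustration of what "nested gene lists" means and how nestedness can be checked.

```python
def nested_lists(scores, thresholds):
    """Gene lists selected at successively relaxed thresholds
    (a stand-in for varying the method's trade-off parameter)."""
    return [frozenset(g for g, s in scores.items() if s >= t)
            for t in sorted(thresholds, reverse=True)]

def is_nested(lists):
    """True when each list is contained in the next (perfect nesting)."""
    return all(a <= b for a, b in zip(lists, lists[1:]))

# Hypothetical relevance scores for three genes.
scores = {"g1": 0.9, "g2": 0.7, "g3": 0.4}
lists = nested_lists(scores, [0.8, 0.6, 0.3])
```

For a real method the lists are only *almost* nested, so `is_nested` would be replaced by a nestedness fraction such as |A ∩ B| / |A| for consecutive lists A ⊆? B.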